Results 1 - 11 of 11
1.
Brain Sci ; 10(11), 2020 Nov 02.
Article in English | MEDLINE | ID: mdl-33147691

ABSTRACT

The efficacy of audiovisual (AV) integration is reflected in the degree of cross-modal suppression of the auditory event-related potentials (ERPs; P1-N1-P2), while stronger semantic encoding is reflected in enhanced late ERP negativities (e.g., N450). We hypothesized that increasing visual stimulus reliability should lead to more robust AV integration and enhanced semantic prediction, reflected in suppression of auditory ERPs and an enhanced N450, respectively. EEG was acquired while individuals watched and listened to clear and blurred videos of a speaker uttering intact or highly intelligible degraded (vocoded) words and made binary judgments about word meaning (animate or inanimate). We found that intact speech evoked a larger negativity between 280 and 527 ms than vocoded speech, suggestive of more robust semantic prediction for the intact signal. For visual reliability, we found that greater cross-modal ERP suppression occurred for clear than for blurred videos prior to sound onset and for the P2 ERP. Additionally, the later semantic-related negativity tended to be larger for clear than for blurred videos. These results suggest that the cross-modal effect is largely confined to suppression of early auditory networks, with a weak effect on networks associated with semantic prediction. However, the semantic-related visual effect on the late negativity may have been tempered by the high reliability of the vocoded signal.
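Component amplitudes of the kind compared above (P1-N1-P2 suppression, late negativities) are conventionally read off the trial-averaged ERP within a component-specific latency window. A minimal sketch, assuming epoched data as a NumPy array; the function name, window values, and data layout are illustrative, not the authors' pipeline:

```python
import numpy as np

def erp_peak(trials, fs, window, polarity):
    """Signed peak amplitude of an ERP component in a latency window.

    trials:   (n_trials, n_samples) epochs time-locked to sound onset.
    fs:       sampling rate in Hz.
    window:   (t_start, t_end) in seconds, e.g. ~(0.08, 0.14) for N1.
    polarity: +1 for positive components (P1, P2), -1 for negative (N1).
    """
    erp = trials.mean(axis=0)                 # average across trials
    i0, i1 = (int(t * fs) for t in window)    # window edges in samples
    # flip so the component peak is a maximum, then restore the sign
    return polarity * (polarity * erp[i0:i1]).max()
```

Cross-modal suppression would then appear as a smaller `erp_peak` magnitude for AV epochs than for auditory-only epochs.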

2.
Multisens Res ; 33(3): 277-294, 2020 02 28.
Article in English | MEDLINE | ID: mdl-32508080

ABSTRACT

Lip-reading improves intelligibility in noisy acoustical environments. We hypothesized that watching mouth movements benefits speech comprehension in a 'cocktail party' by strengthening the encoding of the neural representations of the visually paired speech stream. In an audiovisual (AV) task, EEG was recorded as participants watched and listened to videos of a speaker uttering a sentence while also hearing a concurrent sentence spoken by a speaker of the opposite gender. A key manipulation was that each audio sentence had a 200-ms segment replaced by white noise. To assess comprehension, subjects were tasked with transcribing the AV-attended sentence on randomly selected trials. In the auditory-only trials, subjects listened to the same sentences and completed the same task while watching a static picture of a speaker of either gender. Subjects directed their listening to the voice whose gender matched the speaker in the video. We found that the N1 auditory-evoked potential (AEP) time-locked to white-noise onsets was significantly more inhibited for the AV-attended sentences than for the auditorily-attended (A-attended) and AV-unattended sentences. N1 inhibition to noise onsets has been shown to index restoration of phonemic representations of degraded speech. These results underscore that attention and congruency in the AV setting help streamline the complex auditory scene, partly by reinforcing the neural representations of the visually attended stream, heightening the perception of continuity and comprehension.


Subjects
Auditory Perception; Evoked Potentials, Auditory; Lipreading; Noise; Speech Perception; Attention/physiology; Female; Humans; Language; Male
3.
Eur J Neurosci ; 48(8): 2836-2848, 2018 10.
Article in English | MEDLINE | ID: mdl-29363844

ABSTRACT

We tested the predictions of the dynamic reweighting model (DRM) of audiovisual (AV) speech integration, which posits that spectrotemporally reliable (informative) AV speech stimuli induce a reweighting of processing from low-level to high-level auditory networks. This reweighting decreases sensitivity to acoustic onsets and in turn increases tolerance to AV onset asynchronies (AVOA). EEG was recorded while subjects watched videos of a speaker uttering trisyllabic nonwords that varied in spectrotemporal reliability and in the asynchrony of the visual and auditory inputs. Subjects judged the stimuli as in-sync or out-of-sync. Results showed that subjects exhibited greater AVOA tolerance for non-blurred than blurred visual speech and for less degraded than for more degraded acoustic speech. Increased AVOA tolerance was reflected in reduced amplitude of the P1-P2 auditory evoked potentials, a neurophysiological indication of reduced sensitivity to acoustic onsets and successful AV integration. There was also sustained visual alpha band (8-14 Hz) suppression (desynchronization) following acoustic speech onsets for non-blurred vs. blurred visual speech, consistent with continuous engagement of the visual system as the speech unfolds. The current findings suggest that increased spectrotemporal reliability of acoustic and visual speech promotes robust AV integration, partly by suppressing sensitivity to acoustic onsets, in support of the DRM's reweighting mechanism. Increased visual signal reliability also sustains the engagement of the visual system with the auditory system to maintain alignment of information across modalities.


Subjects
Acoustic Stimulation/methods; Alpha Rhythm/physiology; Auditory Perception/physiology; Nerve Net/physiology; Photic Stimulation/methods; Visual Perception/physiology; Electroencephalography/methods; Female; Humans; Male; Reproducibility of Results; Young Adult
4.
J Neurosci ; 38(7): 1835-1849, 2018 02 14.
Article in English | MEDLINE | ID: mdl-29263241

ABSTRACT

Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering consonant vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV congruent or incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (illusion-fa), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (illusion-ba), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba, and a reduced N1 when they perceived illusion-fa, mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex.

SIGNIFICANCE STATEMENT: The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition). Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe that this will lead to the generation of new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator).


Subjects
Comprehension/physiology; Phonetics; Speech Perception/physiology; Acoustic Stimulation; Auditory Cortex; Auditory Perception/physiology; Electroencephalography; Evoked Potentials, Auditory; Female; Humans; Illusions/psychology; Individuality; Language; Lip/physiology; Male; Photic Stimulation; Reaction Time/physiology; Visual Perception/physiology; Young Adult
5.
Lang Cogn Neurosci ; 32(9): 1102-1118, 2017.
Article in English | MEDLINE | ID: mdl-28966930

ABSTRACT

We examined the relationship between tolerance for audiovisual onset asynchrony (AVOA) and the spectrotemporal fidelity of the spoken words and the speaker's mouth movements. In two experiments that varied only in the temporal order of the sensory modalities, visual speech leading (exp1) or lagging (exp2) acoustic speech, participants watched intact and blurred videos of a speaker uttering trisyllabic words and nonwords that were noise-vocoded with 4, 8, 16, and 32 channels. They judged whether the speaker's mouth movements and the speech sounds were in-sync or out-of-sync. Individuals perceived synchrony (tolerated AVOA) on more trials when the acoustic speech was more speech-like (8 channels and higher vs. 4 channels) and when the visual speech was intact rather than blurred (exp1 only). These findings suggest that enhanced spectrotemporal fidelity of the audiovisual (AV) signal prompts the brain to widen the window of integration, promoting the fusion of temporally distant AV percepts.

6.
Sci Rep ; 6: 32065, 2016 09 12.
Article in English | MEDLINE | ID: mdl-27616188

ABSTRACT

The phase of prestimulus oscillations at 7-10 Hz has been shown to modulate perception of briefly presented visual stimuli. Specifically, a recent combined EEG-fMRI study suggested that a prestimulus oscillation at around 7 Hz represents open and closed windows for perceptual integration by modulating connectivity between lower-order occipital and higher-order parietal brain regions. Here, we utilized brief event-related transcranial alternating current stimulation (tACS) to specifically modulate this prestimulus 7 Hz oscillation and the synchrony between parietal and occipital brain regions, thereby testing for a causal role of this particular prestimulus oscillation in perceptual integration. EEG was acquired at the same time, allowing us to investigate frequency-specific aftereffects phase-locked to stimulation offset. On a behavioural level, our results suggest that tACS did modulate perceptual integration, albeit in an unexpected manner. On an electrophysiological level, our results suggest that brief tACS does induce oscillatory entrainment, visible as frequency-specific activity phase-locked to stimulation offset. Together, our results do not strongly support a causal role of prestimulus 7 Hz oscillations in perceptual integration. However, they suggest that brief tACS is capable of modulating oscillatory activity in a temporally sensitive manner.


Subjects
Cerebral Cortex/physiology; Cortical Synchronization; Visual Perception/physiology; Adult; Brain Waves; Electroencephalography; Female; Humans; Male; Photic Stimulation; Transcranial Direct Current Stimulation; Young Adult
7.
Curr Biol ; 25(2): R76-R77, 2015 Jan 19.
Article in English | MEDLINE | ID: mdl-25602309

ABSTRACT

What we hear can rapidly alter what we see. A new study provides evidence for a mechanism in which 10 Hz oscillations in the visual system define the time window for integrating auditory and visual information.


Subjects
Auditory Perception; Illusions; Visual Cortex/physiology; Visual Perception; Female; Humans; Male
8.
Ear Hear ; 33(1): 144-50, 2012.
Article in English | MEDLINE | ID: mdl-21760513

ABSTRACT

OBJECTIVE: To reduce stimulus transduction artifacts in EEG while using insert earphones. DESIGN: Reference Equivalent Threshold SPLs were assessed for Etymotic ER-4B earphones in 15 volunteers. Auditory brainstem responses (ABRs) and middle latency responses (MLRs), as well as long-duration complex ABRs, to click and /da/ speech stimuli were recorded in a single-case design. RESULTS: Transduction artifacts occurred in raw EEG responses, but they were eliminated by shielding, counter-phasing (averaging across stimuli 180° out of phase), or re-referencing. CONCLUSIONS: Clinical-grade ABRs, MLRs, and cABRs can be recorded with a standard digital EEG system and high-fidelity insert earphones, provided one or more techniques are used to remove the stimulus transduction artifact.
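The counter-phasing step lends itself to a one-line implementation: the electromagnetic transduction artifact inverts when the stimulus waveform is inverted, while the neural response does not, so averaging the responses to the two polarities cancels the artifact. A minimal NumPy sketch (the function name and data shapes are hypothetical):

```python
import numpy as np

def counterphase_average(resp_pos, resp_neg):
    """Average EEG responses to a stimulus and its polarity-inverted copy.

    The transduction artifact flips sign with stimulus polarity and
    cancels in the average; the neural response keeps its sign and survives.
    """
    return 0.5 * (resp_pos + resp_neg)
```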


Subjects
Acoustic Stimulation; Artifacts; Electroencephalography; Evoked Potentials, Auditory, Brain Stem/physiology; Hearing/physiology; Acoustic Stimulation/instrumentation; Acoustic Stimulation/methods; Acoustic Stimulation/standards; Adolescent; Adult; Electroencephalography/instrumentation; Electroencephalography/methods; Electroencephalography/standards; Electrooculography/methods; Electrooculography/standards; Female; Hearing Disorders/diagnosis; Hearing Disorders/physiopathology; Humans; Male; Reproducibility of Results; Transducers/standards; Young Adult
9.
Neuroimage ; 60(1): 530-8, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22178454

ABSTRACT

When speech is interrupted by noise, listeners often perceptually "fill in" the degraded signal, giving an illusion of continuity and improving intelligibility. This phenomenon involves a neural process in which the auditory cortex (AC) response to onsets and offsets of acoustic interruptions is suppressed. Since meaningful visual cues behaviorally enhance this illusory filling-in, we hypothesized that during the illusion, lip movements congruent with acoustic speech should elicit a weaker AC response to interruptions relative to static (no movements) or incongruent visual speech. The AC response to interruptions was measured as the power and inter-trial phase consistency of the auditory evoked theta band (4-8 Hz) activity of the electroencephalogram (EEG) and the N1 and P2 auditory evoked potentials (AEPs). A reduction in the N1 and P2 amplitudes and in theta phase consistency reflected the perceptual illusion at the onset and/or offset of interruptions, regardless of visual condition. These results suggest that the brain engages filling-in mechanisms throughout the interruption, repairing degraded speech for up to ~250 ms following the onset of the degradation. Behaviorally, participants perceived speech continuity over longer interruptions for congruent compared to incongruent or static audiovisual streams. However, this specific behavioral profile was not mirrored in the neural markers of interest. We conclude that lip-reading enhances illusory perception of degraded speech not by altering the quality of the AC response, but by delaying it during degradations so that longer interruptions can be tolerated.
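Inter-trial phase consistency, the theta-band measure named above, can be sketched at a single frequency by projecting each epoch onto a complex sinusoid, keeping only the phase, and taking the resultant length across trials. This is a simplified stand-in for the time-resolved, band-limited estimate the study reports, with illustrative names and data layout:

```python
import numpy as np

def itpc_at(trials, fs, freq):
    """Inter-trial phase consistency at one frequency.

    trials: (n_trials, n_samples) epochs time-locked to an event
            (e.g., an interruption onset).
    Returns the resultant vector length of per-trial unit phasors:
    1 = identical phase on every trial, near 0 = random phase.
    """
    n = trials.shape[1]
    t = np.arange(n) / fs
    phasor = np.exp(-2j * np.pi * freq * t)
    coeffs = trials @ phasor                 # complex amplitude per trial
    units = coeffs / np.abs(coeffs)          # discard amplitude, keep phase
    return float(np.abs(units.mean()))
```

Reduced theta phase consistency at interruption onsets would then show up as a lower `itpc_at` value for illusion trials.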


Subjects
Auditory Cortex/physiology; Electroencephalography; Illusions/physiology; Speech Perception/physiology; Visual Perception/physiology; Adult; Female; Humans; Male
10.
J Neurosci ; 30(2): 620-8, 2010 Jan 13.
Article in English | MEDLINE | ID: mdl-20071526

ABSTRACT

Normal listeners possess the remarkable perceptual ability to select a single speech stream among many competing talkers. However, few studies of selective attention have addressed the unique nature of speech as a temporally extended and complex auditory object. We hypothesized that sustained selective attention to speech in a multitalker environment would act as gain control on the early auditory cortical representations of speech. Using high-density electroencephalography and a template-matching analysis method, we found selective gain to the continuous speech content of an attended talker, greatest at a frequency of 4-8 Hz, in auditory cortex. In addition, the difference in alpha power (8-12 Hz) at parietal sites across hemispheres indicated the direction of auditory attention to speech, as has been previously found in visual tasks. The strength of this hemispheric alpha lateralization, in turn, predicted an individual's attentional gain of the cortical speech signal. These results support a model of spatial speech stream segregation, mediated by a supramodal attention mechanism, enabling selection of the attended representation in auditory cortex.
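The hemispheric alpha-power difference described above is often summarized as a normalized lateralization index over homologous parietal sites. A minimal sketch, in which the function name, inputs, and band edges are illustrative assumptions rather than the study's exact metric:

```python
import numpy as np

def alpha_lateralization(psd_right, psd_left, freqs, band=(8.0, 12.0)):
    """Normalized right-minus-left alpha power difference.

    psd_right / psd_left: power spectra from homologous parietal
    electrodes; freqs: the shared frequency axis in Hz.
    The sign of the index tracks the attended side in
    spatial-attention paradigms.
    """
    mask = (freqs >= band[0]) & (freqs <= band[1])
    r = psd_right[mask].mean()
    l = psd_left[mask].mean()
    return float((r - l) / (r + l))
```

Normalizing by total alpha power keeps the index in [-1, 1], so it can be compared across individuals with different overall alpha amplitudes.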


Subjects
Attention/physiology; Auditory Cortex/physiology; Speech Intelligibility/physiology; Speech Perception/physiology; Speech/physiology; Acoustic Stimulation/methods; Adolescent; Adult; Analysis of Variance; Brain Mapping; Cues (Psychology); Discriminant Analysis; Electroencephalography/methods; Female; Functional Laterality/physiology; Humans; Male; Psycholinguistics; Reaction Time/physiology; Sound Localization; Young Adult
11.
J Neurophysiol ; 94(5): 3314-24, 2005 Nov.
Article in English | MEDLINE | ID: mdl-16033933

ABSTRACT

The neural mechanism that mediates perceptual filling-in of the blind spot is still under discussion. One hypothesis proposes that the cortical representation of the blind spot is activated only under conditions that elicit perceptual filling-in and requires congruent stimulation on both sides of the blind spot. Alternatively, the passive remapping hypothesis proposes that inputs from regions surrounding the blind spot infiltrate the representation of the blind spot in cortex. This theory predicts that independent stimuli presented to the left and right of the blind spot should lead to neighboring/overlapping activations in visual cortex when the blind-spot eye is stimulated but separated activations when the fellow eye is stimulated. Using functional MRI, we directly tested the remapping hypothesis by presenting flickering checkerboard wedges to the left or right of the spatial location of the blind spot, either to the blind-spot eye or to the fellow eye. Irrespective of which eye was stimulated, we found separate activations corresponding to the left and right wedges. We identified the centroid of the activations on a cortical flat map and measured the distance between activations. Distance measures of the cortical gap across the blind spot were accurate and reliable (mean distance: 6-8 mm across subjects, SD approximately 1 mm within subjects). Contrary to the predictions of the remapping hypothesis, cortical distances between activations to the two wedges were equally large for the blind-spot eye and fellow eye in areas V1 and V2/V3. Remapping therefore appears unlikely to account for perceptual filling-in at an early cortical level.


Subjects
Brain Mapping/methods; Evoked Potentials, Visual/physiology; Nerve Net/physiology; Optic Disk/physiology; Visual Cortex/physiology; Visual Fields/physiology; Visual Perception/physiology; Adult; Humans; Male; Photic Stimulation/methods; Visual Pathways/physiology